Efficient Federated Learning for AIoT Applications Using Knowledge Distillation
Authors
Abstract
As a promising distributed machine learning paradigm, federated learning (FL) trains a central model with decentralized data without compromising user privacy, which makes it widely used by Artificial Intelligence Internet of Things (AIoT) applications. However, traditional FL suffers from inaccuracy, since local models are trained using only hard labels while the useful information in incorrect predictions with small probabilities is ignored. Although various solutions try to tackle this bottleneck of FL, most of them introduce significant communication overhead, making deployment on large-scale AIoT devices a great challenge. To address the above problem, this article presents a novel distillation-based FL (DFL) method that enables efficient and accurate FL for AIoT applications. By using knowledge distillation (KD), in each round of training our approach uploads both soft targets and local model gradients to the cloud server for aggregation, and the aggregation results are then dispatched to devices for the next round of local training. During DFL local training, in addition to hard labels, local models learn from approximate soft targets, which can improve accuracy by leveraging the knowledge contained in those targets. To further improve performance, we design a dynamic adjustment strategy for the loss function weights that tunes the ratio of KD to hard-label learning, maximizing the synergy between soft targets and hard labels. Comprehensive experimental results on well-known benchmarks show that our approach significantly improves model accuracy without introducing significant communication overhead.
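The abstract outlines the core mechanics: local models learn from hard labels and aggregated soft targets simultaneously, with a dynamically adjusted weight balancing the two objectives, while the server aggregates both gradients and soft targets each round. The sketch below illustrates one plausible reading of that scheme, assuming a PyTorch setup; the function names, the linear weight schedule, the temperature, and the plain averaging of soft targets are illustrative assumptions inferred from the abstract, not the paper's published algorithm.

```python
# Minimal sketch of a distillation-augmented FL round (assumptions, not
# the paper's exact method): devices train with hard labels plus
# server-aggregated soft targets, and the server averages uploads.
import torch
import torch.nn.functional as F

def dfl_local_loss(logits, hard_labels, soft_targets, alpha, temperature=2.0):
    """Weighted sum of hard-label cross-entropy and a KD term on soft targets."""
    ce = F.cross_entropy(logits, hard_labels)
    kd = F.kl_div(
        F.log_softmax(logits / temperature, dim=1),
        F.softmax(soft_targets / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # standard KD gradient scaling (Hinton et al., 2015)
    return (1.0 - alpha) * ce + alpha * kd

def dynamic_alpha(round_idx, total_rounds, alpha_max=0.5):
    # Hypothetical schedule: rely on hard labels early, then shift weight
    # toward KD once the aggregated soft targets become informative.
    return alpha_max * min(1.0, round_idx / max(1, total_rounds // 2))

def aggregate_soft_targets(per_device_targets):
    # Server side: average the soft targets uploaded by devices, then
    # dispatch the result for the next round of local training.
    return torch.stack(per_device_targets).mean(dim=0)
```

The temperature-squared factor keeps the KD gradient magnitude comparable to the cross-entropy term, which matters here because the dynamic weight `alpha` only makes sense if the two loss components live on similar scales.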
Similar Resources
Learning Efficient Object Detection Models with Knowledge Distillation
Despite significant accuracy improvements in convolutional neural network (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating-point operations. Efforts such as model compression learn compact models with fewer parameters, but with much ...
Learning Loss for Knowledge Distillation with Conditional Adversarial Networks
There is an increasing interest in accelerating neural networks for real-time applications. We study the student-teacher strategy, in which a small and fast student network is trained with the auxiliary information provided by a large and accurate teacher network. We use conditional adversarial networks to learn the loss function to transfer knowledge from teacher to student. The proposed method...
Knowledge Distillation Using Unlabeled Mismatched Images
Current approaches for Knowledge Distillation (KD) either directly use training data or sample from the training data distribution. In this paper, we demonstrate the effectiveness of 'mismatched' unlabeled stimulus to perform KD for image classification networks. For illustration, we consider scenarios where there is a complete absence of training data, or where mismatched stimulus has to be used for augm...
Efficient Knowledge Distillation from an Ensemble of Teachers
This paper describes the effectiveness of knowledge distillation using teacher-student training for building accurate and compact neural networks. We show that with knowledge distillation, information from multiple acoustic models, such as very deep VGG networks and Long Short-Term Memory (LSTM) models, can be used to train standard convolutional neural network (CNN) acoustic models for a variety of...
Coordinating Policy for Federated Applications
At the start of its present term of office in 1997 the UK government published a planning document promising ubiquitous access to Electronic Health Records (EHRs) held within the National Health Service (NHS). If such access is to become a reality then it is essential to guarantee confidentiality, since otherwise the media and the privacy vigilantes will prevent deployment. Among the rights inc...
Journal
Journal title: IEEE Internet of Things Journal
Year: 2023
ISSN: 2372-2541, 2327-4662
DOI: https://doi.org/10.1109/jiot.2022.3229374